This tutorial introduces the influence metric, a measure of indirect connectivity developed and used in the BANC paper (Eckstein et al., 2025).
While direct synaptic connections tell us which neurons are connected, they don’t capture the full picture of how signals propagate through neural circuits. The influence metric quantifies how strongly a neuron or group of neurons can affect downstream targets through both direct and indirect pathways.
The influencer package implements a linear dynamical model of neural signal propagation:
Model equation: τ dr(t)/dt = (W - I)r(t) + s(t)
Where:
- r(t) = neural activity vector
- W = connectivity matrix (scaled by synapse counts)
- s(t) = stimulation input to seed neurons
- τ = time constant
At steady state, the influence score equals:
r∞ = -(W̃ - I)⁻¹s
Where W̃ is rescaled to ensure network stability. Results are log-transformed with a constant (+24) to produce “adjusted influence” scores above zero.
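To make the steady-state solution concrete, here is a minimal base-R sketch on a hypothetical three-neuron chain (not the influencer implementation; weights and the +24 constant are illustrative, following the definitions above):

```r
# Toy network: 1 -> 2 -> 3, plus a weak direct 1 -> 3 edge.
# W[i, j] is the weight of the connection from neuron j onto neuron i.
W <- matrix(0, nrow = 3, ncol = 3)
W[2, 1] <- 0.5  # neuron 1 drives neuron 2
W[3, 2] <- 0.5  # neuron 2 drives neuron 3
W[3, 1] <- 0.1  # weak direct edge from neuron 1 to neuron 3

s <- c(1, 0, 0)  # stimulate neuron 1 (the seed)

# At steady state, tau dr/dt = 0, so r_inf = -(W - I)^-1 s = (I - W)^-1 s
r_inf <- solve(diag(3) - W, s)

# Neuron 3's score combines the direct path (0.1) and the
# indirect two-step path (0.5 * 0.5 = 0.25):
r_inf[3]  # 0.35

# Adjusted influence: log-transform plus a constant to keep scores positive
adjusted <- log(r_inf) + 24
```

In the real calculation W is rescaled (W̃) to guarantee stability and invertibility of (I − W̃); in this toy example the weights are already small enough for the solve to succeed.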
Currently working with dataset: banc_746
Data location: gs://sjcabs_2025_data (Google Cloud Storage)
One way we can use these indirect measures is to understand how sensory neurons from across the body, and effector neurons that project across the body, relate to other neurons of the central nervous system.
This can help us interpret what “deeper” neurons “care” about, or what behaviours they may inform.
As a reminder, for sensory and effector neurons our cell_class and cell_sub_class labels can tell us about the innervation of exterior body parts, as well as internal ones.
Make sure influencer is installed, and that the Python backend dependencies are set up, by running install_python_influence_calculator() (this should only need to be run once).
First, we need to load our data:
# Setup GCS access if needed
if (use_gcs) {
setup_gcs_access()
}
## Setting up GCS access...
## Using conda Python environment...
## ✓ GCS access configured
## ✓ Python warnings suppressed
# Load metadata
meta_path <- construct_path(data_path, dataset, "meta")
meta <- read_feather_gcs(meta_path, use_gcs = use_gcs)
## Authenticating with Google Cloud Storage...
## Reading from GCS: gs://sjcabs_2025_data/banc/banc_746_meta.feather
## Downloading from GCS... (this may take several minutes for large files)
## Downloaded 9.84 MB from GCS
## Loading into memory with Arrow...
## ✓ Done! Loaded 168791 rows
# Load edgelist
edgelist_path <- construct_path(data_path, dataset, "edgelist_simple")
edgelist_simple <- read_feather_gcs(edgelist_path, use_gcs = use_gcs)
## Authenticating with Google Cloud Storage...
## Reading from GCS: gs://sjcabs_2025_data/banc/banc_746_simple_edgelist.feather
## Downloading from GCS... (this may take several minutes for large files)
## Downloaded 3429.91 MB from GCS
## Loading into memory with Arrow...
## ✓ Done! Loaded 113981973 rows
# Show edgelist structure
cat("\nEdgelist columns:", paste(colnames(edgelist_simple), collapse = ", "), "\n")
##
## Edgelist columns: pre, post, count, norm, total_input
head(edgelist_simple, 3)
## # A tibble: 3 × 5
## pre post count norm total_input
## <chr> <chr> <int> <dbl> <int>
## 1 720575941509220642 720575941277394247 1 1 1
## 2 720575941526837604 720575940420901192 1 1 1
## 3 720575941508750721 720575941576493706 1 0.5 2
To speed up influence calculations, we filter out weak connections (fewer than 5 synapses):
# Filter for connections with at least 5 synapses
edgelist_filtered <- edgelist_simple %>%
filter(count >= 5)
cat("Original connections:", nrow(edgelist_simple), "\n")
## Original connections: 113981973
cat("After filtering (≥5 synapses):", nrow(edgelist_filtered), "\n")
## After filtering (≥5 synapses): 1953550
cat("Retained:", round(100 * nrow(edgelist_filtered) / nrow(edgelist_simple), 1), "%\n")
## Retained: 1.7 %
Let’s examine how sensory neurons influence mushroom body dopaminergic neurons. This is biologically relevant because dopaminergic neurons carry reinforcement signals into the mushroom body, so the sensory channels that reach them constrain what the fly can learn about:
# Source: All sensory neurons (afferent flow)
sensory_neurons <- meta %>%
filter(flow == "afferent") %>%
distinct(!!sym(dataset_id), cell_sub_class, cell_type)
cat("Found", nrow(sensory_neurons), "sensory neurons\n")
## Found 15462 sensory neurons
# Get unique sensory sub-classes
sensory_sub_classes <- sensory_neurons %>%
pull(cell_sub_class) %>%
unique() %>%
sort()
cat("Sensory sub-classes (n=", length(sensory_sub_classes), "):\n")
## Sensory sub-classes (n= 109 ):
cat(paste(head(sensory_sub_classes, 10), collapse = ", "), "\n\n")
## abdomen_multidendritic_neuron, abdomen_orphan_neuron, abdomen_oxygenation_neuron, abdomen_strand_neuron, abdominal_ppk_neuron, abdominal_terminalia_bristle, abdominal_wall_multidendritic_neuron, antenna_bristle_neuron, antenna_campaniform_sensillum_neuron, antenna_hygrosensory_receptor_neuron
# Target: All mushroom body dopaminergic neurons
mb_dopamine_neurons <- meta %>%
filter(cell_class == "mushroom_body_dopaminergic_neuron") %>%
distinct(!!sym(dataset_id), cell_sub_class, cell_type)
cat("Found", nrow(mb_dopamine_neurons), "mushroom body dopamine neurons\n")
## Found 255 mushroom body dopamine neurons
# Get unique MB dopamine types
mb_da_types <- mb_dopamine_neurons %>%
pull(cell_type) %>%
unique() %>%
sort()
cat("MB dopamine types (n=", length(mb_da_types), "):\n")
## MB dopamine types (n= 35 ):
cat(paste(head(mb_da_types, 10), collapse = ", "), "\n")
## PAM01, PAM01_b, PAM02, PAM03, PAM04, PAM04_a, PAM05, PAM06, PAM06_b, PAM07
# Prepare data for influencer package
# The Python influencer package expects specific formats:
# - edgelist with pre, post, count (or norm) columns
# - meta with root_id column
# Check edgelist column names
edgelist_cols <- colnames(edgelist_filtered)
cat("Edgelist columns:", paste(edgelist_cols, collapse = ", "), "\n")
## Edgelist columns: pre, post, count, norm, total_input
# Determine pre/post column names and rename if needed
if ("pre" %in% edgelist_cols && "post" %in% edgelist_cols) {
edgelist_for_ic <- edgelist_filtered
} else {
# Need to rename columns
pre_col <- paste0("pre_", dataset_id)
post_col <- paste0("post_", dataset_id)
edgelist_for_ic <- edgelist_filtered %>%
rename(
pre = !!sym(pre_col),
post = !!sym(post_col)
)
}
# Prepare metadata with root_id column
meta_for_ic <- meta %>%
rename(root_id = !!sym(dataset_id))
cat(" Edgelist:", nrow(edgelist_for_ic), "connections\n")
## Edgelist: 1953550 connections
cat(" Metadata:", nrow(meta_for_ic), "neurons\n\n")
## Metadata: 168791 neurons
# Initialize the influence calculator
# This uses the Python backend (ConnectomeInfluenceCalculator)
ic_dataset <- influence_calculator_py(
edgelist_simple = edgelist_for_ic,
meta = meta_for_ic
)
Now we calculate influence scores from each sensory sub-class to all MB dopaminergic neurons:
cat("Note: This will take time - influence calculations involve matrix operations on the full network\n\n")
## Note: This will take time - influence calculations involve matrix operations on the full network
# Get MB dopamine neuron IDs for filtering
mb_dopamine_ids <- mb_dopamine_neurons %>%
pull(!!sym(dataset_id))
# Calculate influence for each sensory sub-class
all_influence_scores_list <- list()
for (i in seq_along(sensory_sub_classes)) {
sensory_sub_class <- sensory_sub_classes[i]
# Progress indicator
if (i %% 5 == 0 || i == length(sensory_sub_classes)) {
cat("Processed", i, "of", length(sensory_sub_classes), "sensory sub-classes\n")
}
# Get IDs for this sensory sub-class
sensory_ids <- sensory_neurons %>%
filter(cell_sub_class == sensory_sub_class) %>%
pull(!!sym(dataset_id))
# Skip if no neurons found
if (length(sensory_ids) == 0) next
# Calculate influence from this sensory sub-class
# calculate_influence_py returns a data frame with columns:
# - id: target neuron ID
# - Influence_score_(unsigned): raw influence score
# - adjusted_influence: log-transformed influence (more interpretable)
influence_scores <- calculate_influence_py(ic_dataset, sensory_ids) %>%
filter(id %in% mb_dopamine_ids) %>%
left_join(
meta %>%
distinct(!!sym(dataset_id), .keep_all = TRUE) %>%
select(id = !!sym(dataset_id), target_class = cell_sub_class, target_type = cell_type),
by = "id"
) %>%
mutate(source = sensory_sub_class)
all_influence_scores_list[[sensory_sub_class]] <- influence_scores
}
# Combine all results
all_influence_scores <- bind_rows(all_influence_scores_list)
# Show sample of results
all_influence_scores %>%
select(source, id, `Influence_score_(unsigned)`, adjusted_influence, target_type) %>%
head(10)
## source id Influence_score_(unsigned)
## 1 abdomen_multidendritic_neuron 720575941477857076 4.367597e-05
## 2 abdomen_multidendritic_neuron 720575941536157930 2.102865e-05
## 3 abdomen_multidendritic_neuron 720575941527558500 8.428382e-06
## 4 abdomen_multidendritic_neuron 720575941445894826 3.925197e-06
## 5 abdomen_multidendritic_neuron 720575941689127692 3.478212e-05
## 6 abdomen_multidendritic_neuron 720575941552246783 1.998037e-05
## 7 abdomen_multidendritic_neuron 720575941685802735 5.199819e-08
## 8 abdomen_multidendritic_neuron 720575941430216073 4.897231e-08
## 9 abdomen_multidendritic_neuron 720575941596156965 2.240533e-06
## 10 abdomen_multidendritic_neuron 720575941655019732 5.681189e-06
## adjusted_influence target_type
## 1 13.961287 PPL102
## 2 13.230375 PPL108
## 3 12.316094 PPL103
## 4 11.551906 PAM02
## 5 13.733593 PPL101
## 6 13.179240 PPL101
## 7 7.227943 PAM06
## 8 7.167989 PAM02
## 9 10.991203 PPL107
## 10 11.921650 PPL202
# Aggregate influence scores by source and target cell type
all_influence_scores_ct <- all_influence_scores %>%
left_join(meta %>%
distinct(cell_sub_class, .keep_all = TRUE) %>%
select(source=cell_sub_class, source_class = cell_class),
by="source") %>%
group_by(source_class, target_type) %>%
summarise(
influence = sum(`Influence_score_(unsigned)`, na.rm = TRUE),
adjusted_influence = log(influence) + 24,
adjusted_influence = ifelse(is.infinite(adjusted_influence), 0, adjusted_influence),
n_targets = n(),
.groups = "drop"
) %>%
filter(!is.na(target_type), !is.na(source_class))
# Show top influences
all_influence_scores_ct %>%
arrange(desc(adjusted_influence)) %>%
select(source_class, target_type, adjusted_influence, n_targets) %>%
head(10)
## # A tibble: 10 × 4
## source_class target_type adjusted_influence n_targets
## <chr> <chr> <dbl> <int>
## 1 olfactory_receptor_neuron PPL202 21.0 4
## 2 olfactory_receptor_neuron PPL201 20.4 6
## 3 olfactory_receptor_neuron PAM07 20.0 14
## 4 hygrosensory_receptor_neuron PPL202 19.6 2
## 5 internal_taste_sensillum_neuron PAM11 19.4 12
## 6 olfactory_receptor_neuron PPL203 19.4 2
## 7 thermosensory_receptor_neuron PPL203 19.1 2
## 8 olfactory_receptor_neuron PAM02 19.1 26
## 9 olfactory_receptor_neuron PAM05 19.0 24
## 10 thermosensory_receptor_neuron PPL202 19.0 4
Let’s create an interactive heatmap showing influence from sensory sub-classes to dopamine neuron types:
# Create a matrix for heatmap
influence_matrix <- all_influence_scores_ct %>%
select(source_class, target_type, adjusted_influence) %>%
pivot_wider(names_from = target_type, values_from = adjusted_influence, values_fill = 0) %>%
column_to_rownames("source_class") %>%
as.matrix()
influence_matrix[is.na(influence_matrix)] <- 0
influence_matrix[is.infinite(influence_matrix)] <- 0
influence_matrix <- influence_matrix[rowSums(influence_matrix) > 100, , drop = FALSE]
# We want to ask what the most prominent influences onto DANs are,
# so we min-max normalise each column of influence_matrix
influence_matrix_norm <- apply(X = influence_matrix, MARGIN = 2,
FUN = function(x) (x - min(x, na.rm = TRUE)) / (max(x, na.rm = TRUE) - min(x, na.rm = TRUE)))
influence_matrix_norm[is.infinite(influence_matrix_norm)] <- 0
influence_matrix_norm[is.na(influence_matrix_norm)] <- 0
# Create static heatmap with pheatmap (saved to PNG)
pheatmap(
influence_matrix_norm,
clustering_method = "ward.D2",
# color = colorRampPalette(c("navy", "blue", "cyan", "yellow", "orange", "red"))(256),
show_rownames = TRUE,
show_colnames = TRUE,
main = "Sensory Influence on MB Dopaminergic Neurons",
filename = file.path(img_dir, paste0(dataset, "_influence_heatmap.png")),
width = 10,
height = 10
)
# Create interactive heatmap with Ward's clustering (no dendrograms shown)
p_influence_heatmap <- heatmaply(
influence_matrix_norm,
dendrogram = "none",
hclust_method = "ward.D2",
#colors = colorRampPalette(c("navy", "blue", "cyan", "yellow", "orange", "red"))(256),
main = "Sensory Influence on MB Dopaminergic Neurons",
xlab = "Target: MB Dopamine Neuron Type",
ylab = "Source: Sensory Sub-Class",
showticklabels = c(TRUE, TRUE),
hide_colorbar = FALSE,
fontsize_row = 8,
fontsize_col = 8,
key.title = "Normalized\nInfluence"
)
p_influence_heatmap
We can also visualise the influence patterns using UMAP, where each point is a dopaminergic neuron:
# Aggregate influence by individual neuron
all_influence_scores_n <- all_influence_scores %>%
group_by(source, id) %>%
summarise(
influence = sum(`Influence_score_(unsigned)`, na.rm = TRUE),
adjusted_influence = log(influence) + 24,
adjusted_influence = ifelse(is.infinite(adjusted_influence), 0, adjusted_influence),
target_type = first(target_type),
target_class = first(target_class),
.groups = "drop"
) %>%
filter(!is.na(id), !is.na(source))
# Create matrix: rows = neurons, columns = sensory sub-classes
influence_matrix_umap <- all_influence_scores_n %>%
select(id, source, adjusted_influence) %>%
pivot_wider(names_from = source, values_from = adjusted_influence, values_fill = 0) %>%
column_to_rownames("id") %>%
as.matrix()
# Run UMAP (explicitly use umap package, not uwot)
set.seed(42)
umap_result <- umap::umap(influence_matrix_umap, n_neighbors = 15, min_dist = 0.1)
# Create data frame for plotting
umap_df <- data.frame(
id = rownames(influence_matrix_umap),
UMAP1 = umap_result$layout[, 1],
UMAP2 = umap_result$layout[, 2]
) %>%
left_join(
meta %>%
distinct(!!sym(dataset_id), .keep_all = TRUE) %>%
select(id = !!sym(dataset_id), cell_type, cell_sub_class, cell_class),
by = "id"
)
# Plot by cell sub-class
p_umap_subclass <- ggplot(umap_df, aes(x = UMAP1, y = UMAP2, color = cell_sub_class)) +
geom_point(alpha = 0.7, size = 2.5) +
labs(
title = "UMAP of MB Dopamine Neurons by Sensory Influence Patterns",
subtitle = paste("Coloured by cell sub-class (n =", nrow(umap_df), "neurons)"),
x = "UMAP 1",
y = "UMAP 2",
color = "Cell Sub-Class"
) +
theme_minimal(base_size = 12) +
theme(
legend.position = "right",
panel.grid.minor = element_blank()
)
print(p_umap_subclass)
save_plot(p_umap_subclass, paste0(dataset, "_influence_umap_subclass"))
# Plot by cell type
p_umap_type <- ggplot(umap_df, aes(x = UMAP1, y = UMAP2, color = cell_type)) +
geom_point(alpha = 0.7, size = 2.5) +
labs(
title = "UMAP of MB Dopamine Neurons by Sensory Influence Patterns",
subtitle = paste("Coloured by cell type (n =", nrow(umap_df), "neurons)"),
x = "UMAP 1",
y = "UMAP 2",
color = "Cell Type"
) +
theme_minimal(base_size = 12) +
theme(
legend.position = "right",
panel.grid.minor = element_blank()
)
save_plot(p_umap_type, paste0(dataset, "_influence_umap_type"))
print(p_umap_type)
Below are more involved analyses, with longer compute times. Working through these will show you how to think about sensory and effector influence together, plot a UMAP based on influence scores, and interpret the biology of our results.
pC1 neurons are a small cluster of sexually dimorphic, doublesex/fruitless-positive neurons in the Drosophila central brain that act as a hub for integrating social cues and controlling sex-specific internal state and behavior. In the male literature they are often referred to as the P1 cluster; in both sexes they sit at the top of a hierarchy that gates courtship, aggression, and related states.
Since BANC is a female nervous system and maleCNS is a male one, we can directly compare information flow onto this sexually dimorphic type between the two sexes.
We are interested in seeing which antennal lobe glomeruli (olfactory and thermosensory), and which gustatory neuron cell sub classes influence pC1 neurons, in both data sets.
First let’s read the BANC and maleCNS metadata, and select our pC1 neurons by doing a regex search ("^pC1|^P1_") on the cell_type column.
# Data set 1: BANC (female)
dataset1 <- "banc_746"
dataset_id1 <- "banc_746_id"
meta_path1 <- construct_path(data_path, dataset1, "meta")
meta1 <- read_feather_gcs(meta_path1, use_gcs = use_gcs) %>%
rename(root_id = !!sym(dataset_id1))
## Authenticating with Google Cloud Storage...
## Reading from GCS: gs://sjcabs_2025_data/banc/banc_746_meta.feather
## Downloading from GCS... (this may take several minutes for large files)
## Downloaded 9.84 MB from GCS
## Loading into memory with Arrow...
## ✓ Done! Loaded 168791 rows
edgelist_path1 <- construct_path(data_path, dataset1, "edgelist_simple")
edgelist_simple1 <- read_feather_gcs(edgelist_path1, use_gcs = use_gcs) %>%
filter(norm >= 0.0001)
## Authenticating with Google Cloud Storage...
## Reading from GCS: gs://sjcabs_2025_data/banc/banc_746_simple_edgelist.feather
## Downloading from GCS... (this may take several minutes for large files)
## Downloaded 3429.91 MB from GCS
## Loading into memory with Arrow...
## ✓ Done! Loaded 113981973 rows
# Data set 2: maleCNS (male)
dataset2 <- "malecns_09"
dataset_id2 <- "malecns_09_id"
meta_path2 <- construct_path(data_path, dataset2, "meta")
meta2 <- read_feather_gcs(meta_path2, use_gcs = use_gcs) %>%
rename(root_id = !!sym(dataset_id2))
## Authenticating with Google Cloud Storage...
## Reading from GCS: gs://sjcabs_2025_data/malecns/malecns_09_meta.feather
## Downloading from GCS... (this may take several minutes for large files)
## Downloaded 12.23 MB from GCS
## Loading into memory with Arrow...
## ✓ Done! Loaded 165114 rows
edgelist_path2 <- construct_path(data_path, dataset2, "edgelist_simple")
edgelist_simple2 <- read_feather_gcs(edgelist_path2, use_gcs = use_gcs) %>%
filter(norm >= 0.0001)
## Authenticating with Google Cloud Storage...
## Reading from GCS: gs://sjcabs_2025_data/malecns/malecns_09_simple_edgelist.feather
## Downloading from GCS... (this may take several minutes for large files)
## Downloaded 1557.85 MB from GCS
## Loading into memory with Arrow...
## ✓ Done! Loaded 142881142 rows
Above, we primed our edgelists for influence score calculation by pruning out weak connections, this time taking norm >= 0.0001 rather than a synapse-count threshold. We expect norm to be more comparable between datasets, whereas count may vary more with each dataset's absolute synapse numbers.
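From the head() of the edgelist shown earlier, norm appears to be count divided by the target's total_input, i.e. the fraction of a postsynaptic neuron's input that each edge supplies. A small sketch (with hypothetical values) of recomputing it and filtering, assuming that convention holds:

```r
library(dplyr)

# Hypothetical mini-edgelist with the same columns as the real one
el <- data.frame(
  pre         = c("a", "b", "c"),
  post        = c("x", "x", "y"),
  count       = c(1L, 3L, 2L),
  total_input = c(4L, 4L, 2L)
)

# norm as an input fraction: insensitive to a dataset's absolute synapse
# counts, which is why we threshold on it when comparing BANC and maleCNS
el <- el %>% mutate(norm = count / total_input)

el %>% filter(norm >= 0.5)  # keeps b->x (0.75) and c->y (1.0)
```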
# Initialize influence calculators
ic_dataset1 <- influence_calculator_py(
edgelist_simple = edgelist_simple1,
meta = meta1
)
ic_dataset2 <- influence_calculator_py(
edgelist_simple = edgelist_simple2,
meta = meta2
)
Now let’s calculate influence scores from all cell types matching ^ORN|^THRN|^HRN|^TRN, and all cell types which match gustatory in cell_function.
# Dataset 1 (BANC): Find pC1 neurons and sensory neurons
pc1_meta1 <- meta1 %>%
filter(grepl("^pC1|^P1_", cell_type, ignore.case = TRUE))
sens_meta1 <- meta1 %>%
filter(
!is.na(cell_type),
grepl("sensory",super_class),
!grepl("^IN|^MN",cell_type),
grepl("^ORN|^THRN|^HRN|^TRN", cell_type) |
grepl("gustatory", cell_function, ignore.case = TRUE)
) %>%
distinct(cell_type, .keep_all = TRUE)
# Dataset 2 (maleCNS): Find pC1 neurons and sensory neurons
pc1_meta2 <- meta2 %>%
filter(grepl("^pC1|^P1_", cell_type, ignore.case = TRUE))
sens_meta2 <- meta2 %>%
filter(
!is.na(cell_type),
grepl("sensory",super_class),
!grepl("^IN|^MN",cell_type),
grepl("^ORN|^THRN|^HRN|^TRN", cell_type) |
grepl("gustatory", cell_function, ignore.case = TRUE)
) %>%
distinct(cell_type, .keep_all = TRUE)
# Get pC1 neuron IDs for filtering
pc1_ids1 <- pc1_meta1 %>% pull(root_id)
pc1_ids2 <- pc1_meta2 %>% pull(root_id)
# Calculate influence from each sensory cell type to pC1 neurons
# Dataset 1 (BANC)
pc1_influence_list1 <- list()
for (i in seq_len(nrow(sens_meta1))) {
sensory_type <- sens_meta1$cell_type[i]
# Get all neurons of this cell type
sensory_ids <- meta1 %>%
filter(cell_type == sensory_type) %>%
pull(root_id)
if (length(sensory_ids) == 0) next
if (i %% 10 == 0 || i == nrow(sens_meta1)) {
cat("Processed", i, "of", nrow(sens_meta1), "BANC sensory types\n")
}
# Calculate influence
influence_scores <- calculate_influence_py(ic_dataset1, sensory_ids) %>%
filter(id %in% pc1_ids1) %>%
mutate(
source_type = sensory_type,
dataset = "BANC"
) %>%
left_join(
pc1_meta1 %>% select(id = root_id, target_type = cell_type),
by = "id"
)
pc1_influence_list1[[sensory_type]] <- influence_scores
}
pc1_influence1 <- bind_rows(pc1_influence_list1)
# Dataset 2 (maleCNS)
pc1_influence_list2 <- list()
for (i in seq_len(nrow(sens_meta2))) {
sensory_type <- sens_meta2$cell_type[i]
# Get all neurons of this cell type
sensory_ids <- meta2 %>%
filter(cell_type == sensory_type) %>%
pull(root_id)
if (length(sensory_ids) == 0) next
if (i %% 10 == 0 || i == nrow(sens_meta2)) {
cat("Processed", i, "of", nrow(sens_meta2), "maleCNS sensory types\n")
}
# Calculate influence
influence_scores <- calculate_influence_py(ic_dataset2, sensory_ids) %>%
filter(id %in% pc1_ids2) %>%
mutate(
source_type = sensory_type,
dataset = "maleCNS"
) %>%
left_join(
pc1_meta2 %>% select(id = root_id, target_type = cell_type),
by = "id"
)
pc1_influence_list2[[sensory_type]] <- influence_scores
}
pc1_influence2 <- bind_rows(pc1_influence_list2)
# Combine both datasets
pc1_influence_all <- bind_rows(pc1_influence1, pc1_influence2)
cat("\nTotal influence scores:", nrow(pc1_influence_all), "\n")
##
## Total influence scores: 21840
cat("Unique sensory types in BANC:", length(unique(pc1_influence1$source_type)), "\n")
## Unique sensory types in BANC: 117
cat("Unique sensory types in maleCNS:", length(unique(pc1_influence2$source_type)), "\n")
## Unique sensory types in maleCNS: 134
We can now visualise two heatmaps, one for BANC and one for maleCNS, of chemosensory influence onto pC1 neurons by cell type, and identify the sensory channels that matter most to them.
# Aggregate influence by source and target cell type
pc1_influence_ct <- pc1_influence_all %>%
group_by(dataset, source_type, target_type) %>%
summarise(
influence = sum(`Influence_score_(unsigned)`, na.rm = TRUE),
adjusted_influence = log(influence) + 24,
adjusted_influence = ifelse(is.infinite(adjusted_influence), 0, adjusted_influence),
n_connections = n(),
.groups = "drop"
) %>%
filter(!is.na(source_type), !is.na(target_type))
# Create separate matrices for each dataset
create_pc1_matrix <- function(data, dataset_name) {
matrix_data <- data %>%
filter(dataset == dataset_name) %>%
select(source_type, target_type, adjusted_influence) %>%
pivot_wider(
names_from = target_type,
values_from = adjusted_influence,
values_fill = 0
) %>%
column_to_rownames("source_type") %>%
as.matrix()
# Clean matrix
matrix_data[is.na(matrix_data)] <- 0
matrix_data[is.infinite(matrix_data)] <- 0
# Filter out rows with minimal total influence, keeping matrix shape
matrix_data <- matrix_data[rowSums(matrix_data) > 50, , drop = FALSE]
return(matrix_data)
return(matrix_data)
}
# Create matrices
pc1_matrix1 <- create_pc1_matrix(pc1_influence_ct, "BANC")
pc1_matrix2 <- create_pc1_matrix(pc1_influence_ct, "maleCNS")
# Min-max normalize by column (per pC1 target type)
normalize_matrix <- function(mat) {
if (nrow(mat) == 0 || ncol(mat) == 0) return(mat)
mat_norm <- apply(mat, MARGIN = 2,
FUN = function(x) {
if (max(x, na.rm = TRUE) == min(x, na.rm = TRUE)) {
return(rep(0, length(x)))
}
(x - min(x, na.rm = TRUE)) / (max(x, na.rm = TRUE) - min(x, na.rm = TRUE))
})
mat_norm[is.infinite(mat_norm)] <- 0
mat_norm[is.na(mat_norm)] <- 0
return(mat_norm)
}
pc1_matrix1_norm <- normalize_matrix(pc1_matrix1)
pc1_matrix2_norm <- normalize_matrix(pc1_matrix2)
# Create static heatmap for BANC
pheatmap(
pc1_matrix1_norm,
clustering_method = "ward.D2",
show_rownames = TRUE,
show_colnames = TRUE,
main = "BANC (Female): Chemosensory Influence on pC1 Neurons",
filename = file.path(img_dir, "banc_pc1_chemosensory_influence.png"),
width = 10,
height = 16,
fontsize = 8,
fontsize_row = 7,
fontsize_col = 9
)
# Create static heatmap for maleCNS
pheatmap(
pc1_matrix2_norm,
clustering_method = "ward.D2",
show_rownames = TRUE,
show_colnames = TRUE,
main = "maleCNS (Male): Chemosensory Influence on pC1/P1 Neurons",
filename = file.path(img_dir, "malecns_pc1_chemosensory_influence.png"),
width = 10,
height = 16,
fontsize = 8,
fontsize_row = 7,
fontsize_col = 9
)
# Create interactive side-by-side heatmaps using heatmaply
# For BANC
p_pc1_banc <- heatmaply(
pc1_matrix1_norm,
dendrogram = "row",
hclust_method = "ward.D2",
main = "BANC (Female): Chemosensory → pC1",
xlab = "Target: pC1 Neuron Type",
ylab = "Source: Sensory Type",
showticklabels = c(TRUE, TRUE),
hide_colorbar = FALSE,
fontsize_row = 7,
fontsize_col = 9,
key.title = "Normalized\nInfluence"
)
print(p_pc1_banc)
# For maleCNS
p_pc1_malecns <- heatmaply(
pc1_matrix2_norm,
dendrogram = "row",
hclust_method = "ward.D2",
main = "maleCNS (Male): Chemosensory → pC1/P1",
xlab = "Target: pC1/P1 Neuron Type",
ylab = "Source: Sensory Type",
showticklabels = c(TRUE, TRUE),
hide_colorbar = FALSE,
fontsize_row = 7,
fontsize_col = 9,
key.title = "Normalized\nInfluence"
)
print(p_pc1_malecns)
Let’s look at another example. Rather than calculating influence from sensory neurons to a target population, let’s define a source population and calculate its influence onto effector neurons, i.e. motor and endocrine neurons.
The abdominal neuromere is a little-studied region of the fly central nervous system. Let’s see if we can break its neurons down into “functional modules” based on their possible divisions by motor control.
First, let’s look at our abdominal subset, and read the edgelist we have there to get the inventory of neurons we want to look at. We just want intrinsic neurons of the VNC that are also in the abdominal ganglion, so let’s remove any afferent or efferent neurons. This makes our source pool.
# Read abdominal ganglion edgelist to get neuron IDs
subset_name <- "abdominal_neuromere"
dataset_base <- sub("_[0-9]+$", "", dataset)
subset_dir <- file.path(data_path, dataset_base, subset_name)
abdominal_edgelist_path <- file.path(subset_dir, paste0(dataset, "_", subset_name, "_simple_edgelist.feather"))
# Read the edgelist for this subset
abdominal_edgelist <- read_feather_gcs(abdominal_edgelist_path, use_gcs = use_gcs)
## Authenticating with Google Cloud Storage...
## Reading from GCS: gs://sjcabs_2025_data/banc/abdominal_neuromere/banc_746_abdominal_neuromere_simple_edgelist.feather
## Downloading from GCS... (this may take several minutes for large files)
## Downloaded 7.5 MB from GCS
## Loading into memory with Arrow...
## ✓ Done! Loaded 274063 rows
# Get unique neuron IDs from pre and post columns
abdominal_ids <- unique(c(abdominal_edgelist$pre, abdominal_edgelist$post))
# Get metadata for these neurons
abdominal_neurons <- meta %>%
filter(!!sym(dataset_id) %in% abdominal_ids)
# Filter to intrinsic neurons (not afferent/efferent)
abdominal_source_neurons <- abdominal_neurons %>%
filter(is.na(flow) | !flow %in% c("afferent", "efferent"), !super_class %in% c("ascending","descending","sensory")) %>%
distinct(!!sym(dataset_id), cell_type, cell_class, cell_sub_class)
# Get their IDs
abdominal_source_ids <- abdominal_source_neurons %>%
pull(!!sym(dataset_id))
# Get all effector neurons (motor and endocrine)
effector_neurons <- meta %>%
filter(flow == "efferent") %>%
distinct(!!sym(dataset_id), cell_type, cell_class, cell_sub_class)
# Get effector IDs for filtering
effector_ids <- effector_neurons %>%
pull(!!sym(dataset_id))
However, we will actually want to use the full edgelist for calculating influence.
Let’s now calculate influence from each source neuron, to every effector neuron.
abdominal_source_cts <- abdominal_source_neurons %>%
filter(!is.na(cell_type)) %>%
distinct(cell_type) %>%
pull(cell_type)
# Calculate influence from each individual abdominal cell type
abdominal_influence_list <- list()
for (i in seq_along(abdominal_source_cts)) {
source_ct <- abdominal_source_cts[i]
source_ids <- abdominal_source_neurons %>%
filter(cell_type == source_ct) %>%
pull(!!sym(dataset_id))
# Progress indicator
if (i %% 50 == 0 || i == length(abdominal_source_cts)) {
cat("Processed", i, "of", length(abdominal_source_cts), "abdominal cell types\n")
}
# Calculate influence from this abdominal cell type
suppressMessages({
influence_scores <- calculate_influence_py(ic_dataset, source_ids) %>%
filter(id %in% effector_ids) %>%
mutate(source = source_ct) %>%
left_join(
abdominal_source_neurons %>%
distinct(cell_type, .keep_all = TRUE) %>%
select(source = cell_type, source_class = cell_class),
by = "source"
) %>%
left_join(
effector_neurons %>%
select(id = !!sym(dataset_id), target_type = cell_type, target_class = cell_class, target_sub_class = cell_sub_class),
by = "id"
)
})
abdominal_influence_list[[source_ct]] <- influence_scores
}
# Combine all results
abdominal_influence <- bind_rows(abdominal_influence_list)
We can again visualise this result as a heatmap, collapsing our sources by cell type and recalculating the adjusted influence score (remember, raw influence is an additive metric; adjusted influence is a log transform applied after summing).
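The order of operations matters here: because raw influence is additive over source neurons, we sum the unsigned scores first and only then apply the log(x) + 24 transform. Summing already-transformed scores would inflate the result. A quick base-R illustration with made-up scores:

```r
# Made-up unsigned influence scores from three source neurons onto one target
raw <- c(2e-6, 3e-6, 5e-6)

# Correct: sum the raw (additive) scores, then log-transform
adjusted_correct <- log(sum(raw)) + 24   # log(1e-5) + 24, about 12.5

# Incorrect: summing log-transformed scores treats them as additive
adjusted_wrong <- sum(log(raw) + 24)     # about 34, badly inflated
```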
# Aggregate by source and target cell type
abdominal_influence_ct <- abdominal_influence %>%
rename(source_type = source) %>%
mutate(target_class = ifelse(is.na(target_class),super_class,target_class)) %>%
mutate(target_sub_class = ifelse(is.na(target_sub_class),target_class,target_sub_class)) %>%
group_by(source_type, target_sub_class) %>%
summarise(
influence = sum(`Influence_score_(unsigned)`, na.rm = TRUE),
adjusted_influence = log(influence) + 24,
adjusted_influence = ifelse(is.infinite(adjusted_influence), 0, adjusted_influence),
n_targets = n(),
.groups = "drop"
) %>%
distinct(source_type, target_sub_class, .keep_all = TRUE) %>%
filter(!is.na(target_sub_class), !is.na(source_type))
# Create matrix for heatmap
abdominal_matrix <- abdominal_influence_ct %>%
select(source_type, target_sub_class, adjusted_influence) %>%
pivot_wider(names_from = target_sub_class, values_from = adjusted_influence, values_fill = 0) %>%
column_to_rownames("source_type") %>%
as.matrix()
abdominal_matrix[is.na(abdominal_matrix)] <- 0
abdominal_matrix[is.infinite(abdominal_matrix)] <- 0
abdominal_matrix <- abdominal_matrix[rowSums(abdominal_matrix) > 50, , drop = FALSE]
# Min-max normalize by column (per effector type)
abdominal_matrix_norm <- apply(X = abdominal_matrix, MARGIN = 2,
FUN = function(x) (x - min(x, na.rm = TRUE)) / (max(x, na.rm = TRUE) - min(x, na.rm = TRUE)))
abdominal_matrix_norm[is.infinite(abdominal_matrix_norm)] <- 0
abdominal_matrix_norm[is.na(abdominal_matrix_norm)] <- 0
# Create static heatmap with pheatmap (saved to PNG)
pheatmap(
abdominal_matrix_norm,
clustering_method = "ward.D2",
show_rownames = FALSE,
show_colnames = TRUE,
main = "Abdominal Neuron Influence on Effector Neurons",
filename = file.path(img_dir, paste0(dataset, "_abdominal_influence_heatmap.png")),
width = 10,
height = 10
)
# Create interactive heatmap with Ward's clustering (no dendrograms shown)
p_abdominal_heatmap <- heatmaply(
abdominal_matrix_norm,
dendrogram = "none",
hclust_method = "ward.D2",
main = "Abdominal Neuron Influence on Effector Neurons",
xlab = "Target: Effector Neuron Type",
ylab = "Source: Abdominal Neuron Type",
showticklabels = c(TRUE, FALSE), # Show column labels; row labels appear on hover
hide_colorbar = FALSE,
fontsize_row = 8,
fontsize_col = 8,
key.title = "Normalized\nInfluence"
)
p_abdominal_heatmap
Perfect, now let’s enrich this with influence from sensory sub-classes. Here’s how to calculate that, very similar to Example 1.
# Get sensory neurons by sub-class (from Example 1)
sensory_sub_classes <- sensory_neurons %>%
pull(cell_sub_class) %>%
unique() %>%
sort()
# Calculate influence for each sensory sub-class
sensory_effector_list <- list()
for (sensory_sub_class in sensory_sub_classes) {
# Get IDs for this sensory sub-class
sensory_ids <- sensory_neurons %>%
filter(cell_sub_class == sensory_sub_class) %>%
pull(!!sym(dataset_id))
if (length(sensory_ids) == 0) next
# Calculate influence
influence_scores <- calculate_influence_py(ic_dataset, sensory_ids) %>%
filter(id %in% abdominal_source_ids) %>%
left_join(
abdominal_source_neurons %>%
select(id = !!sym(dataset_id), target_type = cell_type, target_class = cell_class),
by = "id"
) %>%
mutate(source = sensory_sub_class, source_category = "sensory")
sensory_effector_list[[sensory_sub_class]] <- influence_scores
}
# Combine all results
sensory_abdominal_influence <- bind_rows(sensory_effector_list)
# Aggregate by source and target cell type
abdominal_influence_all <- sensory_abdominal_influence %>%
rename(source_type = target_type,
target_sub_class = source) %>%
group_by(source_type, target_sub_class) %>%
summarise(
influence = sum(`Influence_score_(unsigned)`, na.rm = TRUE),
adjusted_influence = log(influence) + 24,
adjusted_influence = ifelse(is.infinite(adjusted_influence),0,adjusted_influence),
n_targets = n(),
.groups = "drop"
) %>%
distinct(source_type, target_sub_class, .keep_all = TRUE) %>%
rbind(abdominal_influence_ct) %>%
filter(!is.na(source_type),
!is.na(target_sub_class),
adjusted_influence != 0)
Cool. Now let’s combine these normalised influence matrices. We can then make a UMAP from the combined influence profiles of abdominal cell types (here using Euclidean distance between profiles) to reveal potential sensory-effector control. As in tutorial 03, we will use hierarchical clustering and centroid detection to colour and number these clusters.
# Create influence matrix (abdominal types × targets)
influence_matrix <- abdominal_influence_all %>%
select(source_type, target_sub_class, adjusted_influence) %>%
pivot_wider(
names_from = target_sub_class,
values_from = adjusted_influence,
values_fill = 0
) %>%
column_to_rownames("source_type") %>%
as.matrix()
influence_matrix[is.na(influence_matrix)] <- 0
influence_matrix[is.infinite(influence_matrix)] <- 0
influence_matrix <- influence_matrix[rowSums(influence_matrix) > 0, ]
# Run UMAP
umap_result <- uwot::umap(
influence_matrix,
n_neighbors = min(15, nrow(influence_matrix) - 1),
min_dist = 0.1,
metric = "euclidean",
n_threads = 1
)
# Create data frame with UMAP coordinates
umap_df <- data.frame(
source_type = rownames(influence_matrix),
UMAP1 = umap_result[, 1],
UMAP2 = umap_result[, 2]
)
# Perform hierarchical clustering
dist_matrix <- dist(umap_result, method = "euclidean")
hc <- hclust(dist_matrix, method = "ward.D2")
# Dynamic tree cutting
if (require(dynamicTreeCut, quietly = TRUE)) {
dynamic_clusters <- cutreeDynamic(
hc,
distM = as.matrix(dist_matrix),
deepSplit = 2,
minClusterSize = max(3, round(nrow(umap_df) * 0.05))
)
} else {
# Fallback: cut tree at fixed height
dynamic_clusters <- cutree(hc, k = min(8, ceiling(nrow(umap_df) / 5)))
}
## ..cutHeight not given, setting it to 90.8 ===> 99% of the (truncated) height range in dendro.
## ..done.
umap_df$unordered_cluster <- factor(dynamic_clusters)
# Calculate centroids of clusters
centroids <- umap_df %>%
group_by(unordered_cluster) %>%
summarize(
UMAP1_centroid = mean(UMAP1),
UMAP2_centroid = mean(UMAP2),
size = n()
)
print(centroids %>% arrange(desc(size)))
## # A tibble: 7 × 4
## unordered_cluster UMAP1_centroid UMAP2_centroid size
## <fct> <dbl> <dbl> <int>
## 1 1 -0.495 -2.55 105
## 2 2 0.0272 4.71 102
## 3 3 3.13 -0.0833 99
## 4 4 -3.49 -2.24 88
## 5 5 1.39 1.98 87
## 6 6 0.157 -0.317 54
## 7 7 -1.90 -3.68 44
# Calculate pairwise distances between centroids
dist_centroids <- dist(centroids[, c("UMAP1_centroid", "UMAP2_centroid")],
method = "euclidean")
# Order clusters based on hierarchical clustering of centroids
hc_centroids <- hclust(dist_centroids, method = "ward.D2")
dd <- as.dendrogram(hc_centroids)
ordered_cluster <- 1:length(order.dendrogram(dd))
names(ordered_cluster) <- order.dendrogram(dd)
# Map original cluster numbers to new ordered cluster numbers
umap_df$cluster <- ordered_cluster[as.character(umap_df$unordered_cluster)]
umap_df$cluster <- factor(umap_df$cluster, levels = sort(unique(umap_df$cluster)))
# Create color palette
n_clusters <- length(unique(umap_df$cluster))
cluster_colors <- colorRampPalette(c("#E41A1C", "#377EB8", "#4DAF4A",
"#984EA3", "#FF7F00", "#FFFF33"))(n_clusters)
names(cluster_colors) <- sort(unique(umap_df$cluster))
# Calculate cluster centroids for labeling
cluster_centroids <- umap_df %>%
group_by(cluster) %>%
summarise(
UMAP1 = mean(UMAP1),
UMAP2 = mean(UMAP2),
n = n()
)
# Plot UMAP colored by cluster
p_abdominal_umap <- ggplot(umap_df, aes(x = UMAP1, y = UMAP2, color = cluster,
text = paste0("Type: ", source_type,
"\nCluster: ", cluster))) +
geom_point(alpha = 0.7, size = 3) +
scale_color_manual(values = cluster_colors) +
geom_text(
data = cluster_centroids,
aes(x = UMAP1, y = UMAP2, label = cluster),
color = "black",
size = 6,
fontface = "bold",
inherit.aes = FALSE
) +
labs(
title = "Abdominal Neuron Functional Modules",
subtitle = sprintf("%d modules based on influence to sensory and effector neurons", n_clusters)
) +
theme_void() +
theme(
plot.title = element_text(face = "bold", size = 14, hjust = 0.5),
plot.subtitle = element_text(size = 11, hjust = 0.5),
legend.position = "none"
)
save_plot(p_abdominal_umap, paste0(dataset, "_abdominal_functional_modules"))
ggplotly(p_abdominal_umap, tooltip = "text")
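The cluster-renumbering step above is a little opaque: `order.dendrogram()` gives the leaf order of the centroid dendrogram, and a named vector turns that into a lookup from original cluster id to its position in that order. This toy sketch (made-up centroid coordinates) isolates the trick:

```r
# Three toy centroids: points 1 and 3 are close, point 2 is far away
d  <- dist(matrix(c(0, 0, 10, 0, 0.5, 0), ncol = 2, byrow = TRUE))
hc <- hclust(d, method = "ward.D2")
leaf_order <- order.dendrogram(as.dendrogram(hc))  # original ids in leaf order
relabel <- seq_along(leaf_order)   # new labels 1..n, left to right
names(relabel) <- leaf_order       # indexed by original cluster id
# Look up each original id to get its dendrogram-ordered label
relabel[as.character(1:3)]
```

The result is a permutation of 1..3, so clusters that sit next to each other in the dendrogram end up with adjacent numbers in the final plot.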
Now let’s make a static facet plot, where we visualise influence for the top 25 sensory/effector sub-classes!
# Work out the top 25 sensory/effector sub-classes by total influence
top25_sources <- abdominal_influence_all %>%
group_by(target_sub_class) %>%
summarise(total_influence = sum(influence, na.rm = TRUE), .groups = "drop") %>%
arrange(desc(total_influence)) %>%
head(25) %>%
pull(target_sub_class)
print(top25_sources)
## [1] "abdomen_motor_neuron"
## [2] "abdominal_terminalia_bristle"
## [3] "abdomen_orphan_neuron"
## [4] "abdominal_ppk_neuron"
## [5] "abdomen_neurosecretory_cell"
## [6] "hind_leg_motor_neuron"
## [7] "abdomen_multidendritic_neuron"
## [8] "abdominal_wall_multidendritic_neuron"
## [9] "reproductive_tract_neurosecretory_cell"
## [10] "middle_leg_motor_neuron"
## [11] "wing_base_campaniform_sensillum_neuron"
## [12] "posterior_uterine_sensory_neuron"
## [13] "haltere_campaniform_sensillum_neuron"
## [14] "hind_leg_bristle_neuron"
## [15] "wing_steering_motor_neuron"
## [16] "front_leg_motor_neuron"
## [17] "ureter_neurosecretory_cell"
## [18] "haltere_steering_neuron"
## [19] "thoracic_abdominal_segmental_motor_neuron"
## [20] "haltere_bristle_neuron"
## [21] "hind_leg_hair_plate_neuron"
## [22] "haltere_chordotonal_organ_neuron"
## [23] "unknown_thoracic_abdominal_motor_neuron"
## [24] "wing_power_motor_neuron"
## [25] "haltere_power_neuron"
# For each of the top 25 sub-classes, total the adjusted influence per abdominal source type
source_influence_summary <- abdominal_influence_all %>%
filter(target_sub_class %in% top25_sources) %>%
group_by(source_type, target_sub_class) %>%
summarise(
total_adjusted_influence = sum(adjusted_influence, na.rm = TRUE),
.groups = "drop"
)
# Join with UMAP coordinates
# Note: in umap_df, source_type holds the abdominal types being analysed
# Build a long data frame pairing each abdominal type with each of the top 25 sub-classes
influence_for_plot <- abdominal_influence_all %>%
filter(target_sub_class %in% top25_sources) %>%
select(source_type, target_sub_class, adjusted_influence) %>%
complete(source_type, target_sub_class, fill = list(adjusted_influence = 0)) %>%
left_join(
umap_df %>% select(source_type, UMAP1, UMAP2, cluster),
by = "source_type"
) %>%
filter(!is.na(UMAP1)) %>%
group_by(target_sub_class) %>%
mutate(adjusted_influence_minmax =
(adjusted_influence - min(adjusted_influence, na.rm = TRUE)) /
(max(adjusted_influence, na.rm = TRUE) - min(adjusted_influence, na.rm = TRUE))) %>%
ungroup()
# Create faceted UMAP plot
p_facet_umap <- ggplot(influence_for_plot,
aes(x = UMAP1, y = UMAP2, color = adjusted_influence_minmax)) +
geom_point(size = 2, alpha = 0.8) +
scale_color_gradient2(
low = "blue",
mid = "white",
high = "red",
midpoint = median(influence_for_plot$adjusted_influence_minmax, na.rm = TRUE),
name = "Adjusted\nInfluence"
) +
facet_wrap(~ target_sub_class, ncol = 5) +
labs(
title = "Abdominal Neuron Influence Patterns",
subtitle = "Each panel shows influence to a different sensory/effector class"
) +
theme_minimal() +
theme(
plot.title = element_text(face = "bold", size = 16, hjust = 0.5),
plot.subtitle = element_text(size = 12, hjust = 0.5),
strip.text = element_text(face = "bold", size = 10),
strip.background = element_rect(fill = "grey90", color = "grey50"),
axis.title = element_text(size = 10),
axis.text = element_blank(),
axis.ticks = element_blank(),
panel.grid = element_blank()
)
save_plot(p_facet_umap, paste0(dataset, "_abdominal_influence_facets"),
width = 14, height = 14)
print(p_facet_umap)
We have successfully decomposed our neurons into potential “functional” modules, which would bear further investigation.
In this tutorial, you learned how to:
- Compute influence scores from seed neuron groups with calculate_influence_py()
- Aggregate and normalise influence scores into matrices, and visualise them as static and interactive heatmaps
- Embed influence profiles with UMAP and cluster them into putative functional modules
- Visualise per-class influence patterns across the embedding with faceted plots
The influence metric provides a powerful way to understand how signals propagate through neural circuits beyond direct synaptic connections.
sessionInfo()
## R version 4.2.1 (2022-06-23)
## Platform: x86_64-apple-darwin17.0 (64-bit)
## Running under: macOS Big Sur ... 10.16
##
## Matrix products: default
## BLAS: /Library/Frameworks/R.framework/Versions/4.2/Resources/lib/libRblas.0.dylib
## LAPACK: /Library/Frameworks/R.framework/Versions/4.2/Resources/lib/libRlapack.dylib
##
## locale:
## [1] en_GB.UTF-8/en_GB.UTF-8/en_GB.UTF-8/C/en_GB.UTF-8/en_GB.UTF-8
##
## attached base packages:
## [1] stats graphics grDevices utils datasets methods base
##
## other attached packages:
## [1] reticulate_1.34.0 influencer_0.1.0 remotes_2.5.0
## [4] doSNOW_1.0.20 snow_0.4-4 iterators_1.0.14
## [7] foreach_1.5.2 readobj_0.4.1 dynamicTreeCut_1.63-1
## [10] lsa_0.73.3 SnowballC_0.7.0 tidygraph_1.2.3
## [13] ggraph_2.2.1 igraph_1.5.1 htmlwidgets_1.6.4
## [16] uwot_0.1.14 Matrix_1.6-1.1 umap_0.2.10.0
## [19] pheatmap_1.0.12 heatmaply_1.4.0 viridis_0.6.5
## [22] viridisLite_0.4.2 ggdendro_0.1.23 duckdb_0.9.2-1
## [25] DBI_1.2.3 plotly_4.11.0 nat.flybrains_1.8.2
## [28] nat.templatebrains_1.2.1 nat.nblast_1.6.7 nat_1.11.0
## [31] rgl_1.2.8 patchwork_1.1.3 forcats_0.5.2
## [34] stringr_1.6.0 dplyr_1.1.4 purrr_1.1.0
## [37] readr_2.1.5 tidyr_1.3.1 tibble_3.3.0
## [40] ggplot2_4.0.0.9000 tidyverse_1.3.2 arrow_16.1.0
##
## loaded via a namespace (and not attached):
## [1] readxl_1.4.1 backports_1.5.0 spam_2.10-0
## [4] systemfonts_1.2.3 plyr_1.8.9 lazyeval_0.2.2
## [7] crosstalk_1.2.0 digest_0.6.37 ca_0.71.1
## [10] htmltools_0.5.8.1 magrittr_2.0.4 memoise_2.0.1
## [13] googlesheets4_1.1.1 tzdb_0.4.0 graphlayouts_1.1.1
## [16] modelr_0.1.11 extrafont_0.18 extrafontdb_1.0
## [19] askpass_1.2.1 blob_1.2.4 rvest_1.0.3
## [22] rappdirs_0.3.3 ggrepel_0.9.5 textshaping_0.3.6
## [25] haven_2.5.1 xfun_0.54 crayon_1.5.3
## [28] jsonlite_2.0.0 glue_1.8.0 polyclip_1.10-4
## [31] registry_0.5-1 gtable_0.3.6 gargle_1.6.0
## [34] webshot_0.5.5 Rttf2pt1_1.3.11 scales_1.4.0
## [37] Rcpp_1.0.11 bit_4.6.0 dotCall64_1.1-1
## [40] httr_1.4.7 FNN_1.1.3.1 RColorBrewer_1.1-3
## [43] nabor_0.5.0 pkgconfig_2.0.3 farver_2.1.2
## [46] sass_0.4.8 dbplyr_2.2.1 utf8_1.2.6
## [49] labeling_0.4.3 reshape2_1.4.4 tidyselect_1.2.1
## [52] rlang_1.1.6 cellranger_1.1.0 tools_4.2.1
## [55] cachem_1.1.0 cli_3.6.5 generics_0.1.4
## [58] RSQLite_2.3.4 broom_1.0.6 evaluate_1.0.5
## [61] fastmap_1.2.0 ragg_1.2.4 yaml_2.3.10
## [64] knitr_1.50 bit64_4.6.0-1 fs_1.6.3
## [67] filehash_2.4-6 dendroextras_0.2.3 nat.utils_0.6.1
## [70] dendextend_1.17.1 xml2_1.3.6 compiler_4.2.1
## [73] rstudioapi_0.17.1 png_0.1-8 reprex_2.0.2
## [76] tweenr_2.0.2 bslib_0.6.1 stringi_1.8.3
## [79] RSpectra_0.16-1 lattice_0.20-45 vctrs_0.6.5
## [82] pillar_1.11.1 lifecycle_1.0.4 jquerylib_0.1.4
## [85] data.table_1.16.2 seriation_1.4.0 R6_2.6.1
## [88] TSP_1.2-1 gridExtra_2.3 codetools_0.2-18
## [91] dichromat_2.0-0.1 MASS_7.3-58.1 assertthat_0.2.1
## [94] openssl_2.3.4 withr_3.0.2 parallel_4.2.1
## [97] hms_1.1.3 grid_4.2.1 rmarkdown_2.30
## [100] S7_0.2.0 googledrive_2.1.1 ggforce_0.4.1
## [103] lubridate_1.8.0 base64enc_0.1-3